
    A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection

    We propose a new space-variant anisotropic regularisation term for variational image restoration, based on the statistical assumption that the gradients of the target image are locally distributed according to a bivariate generalised Gaussian distribution. The highly flexible variational structure of the corresponding regulariser encodes several free parameters, which hold the potential for faithfully modelling the local geometry of the image and describing local orientation preferences. For an automatic estimation of such parameters, we design a robust maximum likelihood approach and report results on its reliability on synthetic data and natural images. For the numerical solution of the corresponding image restoration model, we use an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM). A suitable preliminary variable splitting, together with a novel result in multivariate non-convex proximal calculus, yields a very efficient minimisation algorithm. Several numerical results showing significant quality improvement of the proposed model with respect to some related state-of-the-art competitors are reported, in particular in terms of texture and detail preservation.
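
    A schematic form of the local statistical model described above, written here only as a hedged sketch (the exact space-variant parameterisation used in the paper may differ): at pixel i the gradient g_i = (∇u)_i is assumed to follow a zero-mean bivariate generalised Gaussian with shape parameter β_i and an anisotropic scale matrix Σ_i encoding local orientation, so that the induced regulariser is the corresponding negative log-density.

```latex
% Hedged sketch of the assumed local gradient model (parameterisation assumed):
% bivariate generalised Gaussian density and the induced MAP-type regulariser.
p(g_i) \;\propto\; |\Sigma_i|^{-1/2}
  \exp\!\Big(-\tfrac{1}{2}\big(g_i^{\top}\Sigma_i^{-1}g_i\big)^{\beta_i}\Big),
\qquad
R(u) \;=\; \sum_i \tfrac{1}{2}\big((\nabla u)_i^{\top}\Sigma_i^{-1}(\nabla u)_i\big)^{\beta_i}.
```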

    Masked unbiased principles for parameter selection in variational image restoration under Poisson noise

    In this paper we address the problem of automatically selecting the regularization parameter in variational models for the restoration of images corrupted by Poisson noise. More specifically, we first review relevant existing unmasked selection criteria, which fully exploit the acquired data by considering all pixels in the selection procedure. Then, based on an idea originally proposed by Carlavan and Blanc-Feraud to deal effectively with dark backgrounds and/or low photon-counting regimes, we introduce and discuss the masked versions (some of them already existing) of the considered unmasked selection principles, obtained by simply discarding the pixels measuring zero photons. However, we prove that such a blind masking strategy yields a bias in the resulting principles, which can be overcome by introducing a novel positive Poisson distribution that correctly models the statistical properties of the undiscarded noisy data. This distribution is at the core of the newly proposed masked unbiased counterparts of the discussed strategies. All the unmasked, masked biased, and masked unbiased principles are extensively compared on the restoration of different images in a wide range of photon-counting regimes. Our tests allow us to conclude that the novel masked unbiased selection strategies, on average, compare favorably with their unmasked and masked biased counterparts.
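
    A minimal illustration, not taken from the paper, of the unmasked versus masked evaluation of a selection functional: the same Poisson-type discrepancy computed over all pixels and over only the pixels that measured at least one photon. The specific functional used below (a generalised Kullback-Leibler divergence) and all names are assumptions; the masked unbiased corrections proposed in the paper are not reproduced here.

```python
# Hedged sketch: "unmasked" vs "masked" evaluation of a Poisson-type
# discrepancy. The generalised KL functional is an assumption for
# illustration; it is not necessarily the principle used in the paper.
import numpy as np

def kl_poisson(y, lam, eps=1e-12):
    """Generalised KL divergence between observed counts y and model intensity lam."""
    return np.sum(lam - y + y * np.log((y + eps) / (lam + eps)))

def masked_kl_poisson(y, lam):
    """Same functional restricted to pixels that measured at least one photon."""
    mask = y > 0
    return kl_poisson(y[mask], lam[mask])

# Synthetic low-count data: many pixels measure zero photons.
rng = np.random.default_rng(0)
y = rng.poisson(lam=0.5, size=(64, 64)).astype(float)
lam = np.full_like(y, 0.5)                       # candidate model intensity
print(kl_poisson(y, lam), masked_kl_poisson(y, lam))
```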

    Space-variant Generalized Gaussian Regularization for Image Restoration

    We propose a new space-variant regularization term for variational image restoration based on the assumption that the gradient magnitudes of the target image are locally distributed according to a half-Generalized Gaussian distribution. This leads to a highly flexible regularizer characterized by two per-pixel free parameters, which are automatically estimated from the observed image. The proposed regularizer is coupled with either the L2 or the L1 fidelity term, in order to deal effectively with additive white Gaussian noise or impulsive noise such as additive white Laplace and salt-and-pepper noise. The restored image is efficiently computed by means of an iterative numerical algorithm based on the alternating direction method of multipliers. Numerical examples indicate that the proposed regularizer holds the potential for achieving high-quality restorations for a wide range of target images characterized by different gradient distributions and for the different types of noise considered.
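
    A small sketch, not the authors' code, of how the objective suggested by this abstract could be evaluated: a per-pixel half-generalised-Gaussian penalty on gradient magnitudes plus either an L2 or an L1 data term. The parameter names and values, and the use of the identity as forward operator, are assumptions for illustration only.

```python
# Hedged sketch of the space-variant objective suggested by the abstract:
# sum_i (|grad u|_i / alpha_i)^{p_i} + L2 or L1 fidelity. All parameter
# fields and values below are assumptions, not the paper's estimates.
import numpy as np

def gradient_magnitude(u):
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward horizontal differences
    gy = np.diff(u, axis=0, append=u[-1:, :])   # forward vertical differences
    return np.sqrt(gx ** 2 + gy ** 2)

def objective(u, b, p, alpha, mu, fidelity="l2"):
    """Space-variant regulariser plus an L2 (Gaussian noise) or
    L1 (Laplace / salt-and-pepper noise) data-fidelity term."""
    reg = np.sum((gradient_magnitude(u) / alpha) ** p)
    if fidelity == "l2":
        fid = 0.5 * mu * np.sum((u - b) ** 2)
    else:
        fid = mu * np.sum(np.abs(u - b))
    return reg + fid

b = np.random.rand(64, 64)        # observed image (toy data)
u = b.copy()                      # candidate restoration
p = np.full_like(b, 0.8)          # per-pixel shape field (assumed values)
alpha = np.full_like(b, 0.1)      # per-pixel scale field (assumed values)
print(objective(u, b, p, alpha, mu=10.0),
      objective(u, b, p, alpha, mu=10.0, fidelity="l1"))
```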

    A general framework for nonlinear regularized Krylov-based image restoration

    This paper introduces a new approach to computing an approximate solution of Tikhonov-regularized large-scale ill-posed problems with a general nonlinear regularization operator. The iterative method applies a sequence of projections onto generalized Krylov subspaces using a semi-implicit approach to deal with the nonlinearity in the regularization term. A suitable value of the regularization parameter is determined by the discrepancy principle. Computed examples illustrate the performance of the method applied to the restoration of blurred and noisy images.
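
    A toy illustration, not the paper's algorithm, of the parameter-selection rule mentioned above: the discrepancy principle applied to standard Tikhonov regularization on a small dense problem. The generalized Krylov subspace projections and the nonlinear regularization operator of the paper are not reproduced; the bisection scheme and all values are assumptions.

```python
# Hedged sketch: discrepancy principle for choosing the Tikhonov parameter
# on a small dense least-squares problem. This only illustrates the selection
# rule named in the abstract, not the paper's Krylov-based method.
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_lambda(A, b, noise_norm, tau=1.01, lo=1e-8, hi=1e2, iters=60):
    """Find lam such that ||A x_lam - b|| is approximately tau * noise_norm,
    using geometric bisection (the residual norm grows with lam)."""
    target = tau * noise_norm
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        residual = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        lo, hi = (lo, mid) if residual > target else (mid, hi)
    return np.sqrt(lo * hi)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
noise = 0.05 * rng.standard_normal(50)
b = A @ x_true + noise
lam = discrepancy_lambda(A, b, np.linalg.norm(noise))   # noise level assumed known
print(lam, np.linalg.norm(tikhonov(A, b, lam) - x_true))
```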

    ETNA - Electronic Transactions on Numerical Analysis

    Electronic Transactions on Numerical Analysis (ETNA) is an electronic journal for the publication of significant new developments in numerical analysis and scientific computing. Papers of the highest quality that deal with the analysis of algorithms for the solution of continuous models and numerical linear algebra are appropriate for ETNA, as are papers of similar quality that discuss implementation and performance of such algorithms. New algorithms for current or new computer architectures are appropriate provided that they are numerically sound. However, the focus of the publication should be on the algorithm rather than on the architecture. The journal is published by the Kent State University Library in conjunction with the Institute of Computational Mathematics at Kent State University. Reviews of all ETNA papers appear in Mathematical Reviews and Zentralblatt für Mathematik. Reference information for ETNA papers also appears in the expanded Science Citation Index. ETNA is registered with the Library of Congress and has ISSN 1068-9613.

    Development of methods for the correct classification of barcode signals in the presence of noise and high levels of blur

    "Development of methods for the correct classification of barcode signals in the presence of noise and high levels of blur." Preface: At present, the decoding limits of barcodes are determined by two main factors: resolution and blur, both introduced by the acquisition system. As far as resolution is concerned, 2D devices are sufficiently robust, since they rely on particularly efficient reconstruction algorithms; blur, on the other hand, is the problem to be addressed and solved, as it substantially limits decoding performance. Blur, combined with the unavoidable noise in the signals involved, causes the loss of part of the information and prevents the codewords of the barcode from being classified correctly. Objective: the main goal of this work is to improve the correct-classification performance for signals affected by noise and high levels of blur. In particular, current algorithms for classifying the received signals require, as a prerequisite for success, that all the edges corresponding to the bar-space transitions typical of a linear barcode be present within the signals themselves. This makes it unavoidable to adopt algorithms capable of recovering these transitions, especially in the presence of blur and noise. The main alterations caused by blur are the disappearance of some edges of the signal and the modification of their mutual distances. Essentially, the phenomenon can be traced back to the distortions of a generic transmission channel, which are highlighted and well modelled by means of "eye diagrams" designed to expose intersymbol interference. The alterations introduced are characterised mainly by two factors: the presence of noise and distortions due above all to the optics. The latter, in particular, can be modelled by specific transfer functions whose characteristics are therefore known. Since they are known, it is possible, in principle, to take them into account in the signal analysis and classification process, without necessarily resorting to equalisation of the signal itself, but instead considering functional methods that account for possible invariances with respect to them and aim at minimising the decision (classification) error.
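
    A minimal synthetic illustration, not taken from the thesis, of the degradation model described above: a linear-barcode-like signal convolved with a known blur kernel (standing in for the optical transfer function) plus additive noise, showing how bar-space edges become undetectable. The Gaussian kernel, its width and the noise level are assumptions chosen only for illustration.

```python
# Hedged sketch: blur + noise acting on a synthetic linear-barcode-like signal.
# The Gaussian kernel and all parameter values are assumptions for illustration.
import numpy as np

def gaussian_kernel(sigma):
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

rng = np.random.default_rng(1)
modules = rng.integers(1, 5, size=30)          # bar/space widths in modules
levels = np.tile([1.0, 0.0], 15)               # alternating bars and spaces
clean = np.repeat(levels, modules * 8)         # 8 samples per module
blurred = np.convolve(clean, gaussian_kernel(6.0), mode="same")
noisy = blurred + 0.02 * rng.standard_normal(clean.shape)

# Large sample-to-sample jumps as a crude proxy for detectable bar-space edges.
edges_clean = np.count_nonzero(np.abs(np.diff(clean)) > 0.5)
edges_noisy = np.count_nonzero(np.abs(np.diff(noisy)) > 0.5)
print(edges_clean, edges_noisy)                # blur + noise suppresses most edges
```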

    Numerical Mathematics: Theory, Methods and Applications

    Numerical Mathematics: Theory, Methods and Applications (NM-TMA) publishes high-quality original research papers on the construction, analysis and application of numerical methods for solving scientific and engineering problems. Important research and expository papers devoted to the numerical solution of mathematical equations arising in all areas of science and technology are expected. The journal originates from the journal Numerical Mathematics: A Journal of Chinese Universities (English Edition). NM-TMA is a refereed international journal sponsored by Nanjing University and the Ministry of Education of China. As an international journal, NM-TMA is published in a timely fashion in printed and electronic forms.